Mathematicians spent 2025 exploring the edge of mathematics

New Scientist

In 2025, the edges of mathematics came a little more sharply into view when members of the online Busy Beaver Challenge community closed in on a huge number that threatens to defy the logical underpinnings of the subject. This number is the next in the "Busy Beaver" sequence, a series of ever-larger numbers that emerges from a seemingly simple question: how do we know whether a computer program will run forever? To find out, researchers turn to the work of mathematician Alan Turing, who showed that any computer algorithm can be mimicked by an imagined simplified device called a Turing machine. More complex algorithms correspond to Turing machines with larger sets of instructions or, in mathematical parlance, more states. For example, BB(1) is 1 and BB(2) is 6, so making the algorithm twice as complex increases its maximum runtime sixfold.
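To make the BB(2) = 6 figure concrete, here is a minimal sketch (not from the article) of a Turing machine simulator running the known 2-state busy beaver champion, which halts after 6 steps having written 4 ones:

```python
# Illustrative sketch: a tiny Turing machine simulator.
# The transition table below is the standard 2-state busy beaver champion;
# it halts after 6 steps, matching BB(2) = 6 in the step-counting convention.

def run(transitions, state="A", halt="H", max_steps=1000):
    """Simulate a single-tape Turing machine on an all-blank (0) tape."""
    tape, head, steps = {}, 0, 0
    while state != halt and steps < max_steps:
        symbol = tape.get(head, 0)                 # blank cells read as 0
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())               # (steps taken, ones written)

bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),                       # halting transition
}

steps, ones = run(bb2)
print(steps, ones)  # 6 steps, 4 ones
```

Finding BB(n) for larger n means checking every n-state table this way and proving that the non-halting ones really never halt, which is what makes the sequence so hard to pin down.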


Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers

Neural Information Processing Systems

A common lens to theoretically study neural net architectures is to analyze the functions they can approximate. However, the constructions from approximation theory often have unrealistic aspects, for example, reliance on infinite precision to memorize target function values. To address this issue, we propose a formal definition of statistically meaningful approximation which requires the approximating network to exhibit good statistical learnability.


On the Holographic Geometry of Deterministic Computation

Nye, Logan

arXiv.org Artificial Intelligence

Standard simulations of Turing machines suggest a linear relationship between the temporal duration $t$ of a run and the amount of information that must be stored by known simulations to certify, verify, or regenerate the configuration at time $t$. For deterministic multitape Turing machines over a fixed finite alphabet, this apparent linear dependence is not intrinsic: any length-$t$ run can be simulated using $O(\sqrt{t})$ work-tape cells via a Height Compression Theorem for succinct computation trees together with an Algebraic Replay Engine. In this paper we recast that construction in geometric and information-theoretic language. We interpret the execution trace as a spacetime DAG of local update events and exhibit a family of recursively defined holographic boundary summaries such that, along the square-root-space simulation, the total description length of all boundary data stored at any time is $O(\sqrt{t})$. Using Kolmogorov complexity, we prove that every internal configuration has constant conditional description complexity given the appropriate boundary summary and time index, establishing that the spacetime bulk carries no additional algorithmic information beyond its boundary. We express this as a one-dimensional computational area law: there exists a simulation in which the information capacity of the active "holographic screen" needed to generate a spacetime region of volume proportional to $t$ is bounded by $O(\sqrt{t})$. In this precise sense, deterministic computation on a one-dimensional work tape admits a holographic representation, with the bulk history algebraically determined by data residing on a lower-dimensional boundary screen.


On the Computability of Artificial General Intelligence

Mappouras, Georgios, Rossides, Charalambos

arXiv.org Artificial Intelligence

In recent years we have observed rapid and significant advancements in artificial intelligence (A.I.), so much so that many wonder how close humanity is to developing an A.I. model that can achieve a human level of intelligence, also known as artificial general intelligence (A.G.I.). In this work we examine this question and attempt to define the upper bounds, not just of A.I., but of any machine-computable process (a.k.a. an algorithm). To answer this question, however, one must first precisely define A.G.I. We borrow prior work's definition of A.G.I. [1] that best describes the sentiment of the term as used by the leading developers of A.I.: the ability to be creative and innovate in some field of study in a way that unlocks new and previously unknown functional capabilities in that field. Based on this definition we draw new bounds on the limits of computation. We formally prove that no algorithm can demonstrate new functional capabilities that were not already present in the initial algorithm itself. Therefore, no algorithm (and thus no A.I. model) can be truly creative in any field of study, whether that is science, engineering, art, sports, etc. In contrast, A.I. models can demonstrate existing functional capabilities, as well as combinations and permutations of existing functional capabilities. We conclude this work by discussing the implications of this proof, both for the future of A.I. development and for what it means for the origins of human intelligence.



Supplementary for Turing Completeness of Bounded-Precision Recurrent Neural Networks

Chung, Stephen

Neural Information Processing Systems

Case: the Turing machine is moving right. The proof is similar to that of Theorem 1 but with more neurons. The general idea is that the required update can be constructed as a two-step process. In the first step, we apply the equations used in the proof of Theorem 1 for neurons 1 to 6. The update equations for neurons 1 to 6 are the same as those in the proof of Theorem 1, except for 4 (the tape neurons). The proof is similar to that of Theorem 2 but with more neurons.
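The construction details are abridged in this excerpt, but as background, Turing-completeness proofs for RNNs commonly split the tape at the head into two stacks, so that a right move pushes the freshly written symbol onto the left half and pops the next symbol off the right half. A minimal sketch of that generic bookkeeping (an assumption for illustration, not the paper's actual neuron-update equations):

```python
# Hedged sketch: the tape is represented as two stacks around the head.
# `left` holds cells to the left of the head (nearest cell last), `right`
# holds the cell under the head onward.  A right move writes the new symbol,
# absorbs it into the left half, and exposes the next cell on the right.

def move_right(left, right, written):
    """Return (new_left, new_head_symbol, new_right) after a right move."""
    left = left + [written]            # written cell joins the left half
    head = right[0] if right else 0    # next cell; off the end reads blank (0)
    right = right[1:]
    return left, head, right

left, head, right = move_right([1], [0, 1, 1], 1)
# head now reads the symbol that was immediately to the right
```

In the bounded-precision setting, each stack is in turn encoded in the activations of a fixed group of neurons, which is why the "tape neurons" need their own update equations for each head direction.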